Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem
Self Organizing Migrating Algorithm (SOMA) is a meta-heuristic algorithm
based on the self-organizing behavior of individuals in a simulated social
environment. SOMA performs iterative computations on a population of potential
solutions in the given search space to obtain an optimal solution. In this
paper, an Opportunistic Self Organizing Migrating Algorithm (OSOMA) has been
proposed that introduces a novel strategy to generate perturbations
effectively. This strategy allows each individual to span more candidate
solutions and thus produce better solutions. A comprehensive
analysis of OSOMA on multi-dimensional unconstrained benchmark test functions
is performed. OSOMA is then applied to solve the real-time Dynamic Traveling
Salesman Problem (DTSP). The real-time DTSP has been formulated and simulated
using real-time data from Google Maps, with a cost-metric that varies between
any two cities. Although DTSP is a common and intuitive model of real-world
routing, its presence in the literature is still limited. OSOMA
performs exceptionally well on the problems mentioned above. To substantiate
this claim, the performance of OSOMA is compared with SOMA, Differential
Evolution, and Particle Swarm Optimization.
Comment: 6 pages, published in CISS 201
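As a rough illustration of the migration loop described above, here is a minimal Python sketch of the canonical SOMA (AllToOne) step. OSOMA's opportunistic perturbation strategy is not detailed in the abstract, so the standard PRT-vector perturbation is shown, and all parameter values (step, path length, PRT rate) are illustrative.

```python
import numpy as np

def soma_all_to_one(f, bounds, pop_size=20, path_length=3.0,
                    step=0.11, prt=0.1, migrations=50, seed=0):
    """Minimal SOMA (AllToOne) sketch: each individual migrates toward
    the current leader in discrete steps; a random PRT vector masks
    dimensions to perturb the direction of movement."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.shape[0]))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(migrations):
        leader = pop[np.argmin(fit)].copy()
        for i in range(pop_size):
            best_x, best_f = pop[i].copy(), fit[i]
            t = step
            while t <= path_length:
                # PRT vector: each dimension moves with probability `prt`
                prt_vec = (rng.random(lo.shape[0]) < prt).astype(float)
                cand = np.clip(pop[i] + (leader - pop[i]) * t * prt_vec,
                               lo, hi)
                fc = f(cand)
                if fc < best_f:
                    best_x, best_f = cand, fc
                t += step
            pop[i], fit[i] = best_x, best_f
    k = np.argmin(fit)
    return pop[k], fit[k]

# Example: minimize the sphere function, a standard unconstrained benchmark.
dim = 10
x, fx = soma_all_to_one(lambda v: float(np.sum(v * v)),
                        (np.full(dim, -5.0), np.full(dim, 5.0)))
print(fx)
```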
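The real-time DTSP setup can be sketched similarly: the tour cost is accumulated with a cost-metric that depends on the (simulated) departure time. `cost_at` below is a hypothetical stand-in for a live travel-time query such as the Google Maps data used in the paper, and the time-of-day modulation is invented purely for illustration.

```python
import numpy as np

def tour_cost(tour, cost_at):
    """Cost of a closed tour when the edge cost between two cities
    varies with the departure time t."""
    t = total = 0.0
    n = len(tour)
    for k in range(n):
        i, j = tour[k], tour[(k + 1) % n]
        c = cost_at(i, j, t)  # time-varying cost-metric between i and j
        total += c
        t += c                # arrival time at the next city
    return total

# Toy stand-in: a static base matrix modulated by a time-of-day factor.
rng = np.random.default_rng(1)
base = rng.uniform(10, 60, size=(5, 5))
cost_at = lambda i, j, t: base[i, j] * (1.0 + 0.2 * np.sin(t / 30.0))
print(tour_cost([0, 3, 1, 4, 2], cost_at))
```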
Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding
There is a growing body of work in recent years to develop pre-trained
language models (PLMs) for the Arabic language. This work addresses two major
problems in existing Arabic PLMs that constrain progress in Arabic NLU and NLG.
First, existing Arabic PLMs are not well explored, and their pre-training can
be improved significantly using a more methodical
approach. Second, there is a lack of systematic and reproducible evaluation of
these models in the literature. In this work, we revisit both the pre-training
and evaluation of Arabic PLMs. In terms of pre-training, we explore improving
Arabic LMs from three perspectives: quality of the pre-training data, size of
the model, and incorporating character-level information. As a result, we
release three new Arabic BERT-style models (JABER, Char-JABER, and SABER), and
two T5-style models (AT5S and AT5B). In terms of evaluation, we conduct a
comprehensive empirical study to systematically evaluate the performance of
existing state-of-the-art models on ALUE, a leaderboard-powered benchmark for
Arabic NLU tasks, and on a subset of the ARGEN benchmark for
Arabic NLG tasks. We show that our models significantly outperform existing
Arabic PLMs and achieve new state-of-the-art performance on both discriminative
(NLU) and generative (NLG) Arabic tasks. Our models and source code to
reproduce our results will be made available shortly.
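As a hedged illustration of the character-level perspective mentioned above, the sketch below shows one common way to fuse character-level information into a BERT-style embedding layer: a small per-token char-CNN whose pooled output is summed with the subword embedding. The abstract does not describe Char-JABER's actual mechanism, and every size and name here is an assumption.

```python
import torch
import torch.nn as nn

class CharAugmentedEmbedding(nn.Module):
    """Sketch: subword embedding + pooled char-CNN feature per token.
    All dimensions are illustrative, not Char-JABER's real config."""
    def __init__(self, subword_vocab=64000, char_vocab=256,
                 hidden=768, char_dim=64):
        super().__init__()
        self.subword = nn.Embedding(subword_vocab, hidden)
        self.chars = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, hidden, kernel_size=3, padding=1)

    def forward(self, token_ids, char_ids):
        # token_ids: (batch, seq); char_ids: (batch, seq, max_chars)
        b, s, c = char_ids.shape
        ch = self.chars(char_ids).view(b * s, c, -1).transpose(1, 2)
        char_feat = self.conv(ch).max(dim=-1).values.view(b, s, -1)
        return self.subword(token_ids) + char_feat  # fused embedding

emb = CharAugmentedEmbedding()
tok = torch.randint(0, 64000, (2, 8))
chs = torch.randint(0, 256, (2, 8, 12))
print(emb(tok, chs).shape)  # torch.Size([2, 8, 768])
```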